16. Autonomous Mode
Once you have settled on the appropriate analyses for the perception task, you'll implement them within the Python scripts provided in the project repository. Here's an overview of how you'll implement autonomous navigation and mapping.
Navigating the Environment
The main script you will use for autonomous navigation and mapping is called drive_rover.py (you can find it in the code folder in the project repository). In this section, we'll step through this file more or less line by line so you know what's going on. You are free to modify drive_rover.py, but it's not required and the script should run as-is for the purposes of the project.
At the top of the file are a number of imports, but the two most relevant to you are these:
# Import functions for perception and decision making
from perception import perception_step
from decision import decision_step
perception.py and decision.py are also included in the project repository and are where you will modify and build out the perception and decision-making steps of the process.
These two scripts have already been populated with some starter code. In the case of perception.py, it contains all the functions from the lesson plus one empty function called perception_step(). Your job here is to populate perception_step() with the appropriate analyses and update the Rover() object accordingly.
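As one concrete piece of that analysis, the last step of a typical perception_step() converts rover-centric pixel positions into the polar coordinates stored in Rover.nav_dists and Rover.nav_angles. Here's a minimal sketch of that conversion; the helper name to_polar_coords matches the lesson code, but treat the details as illustrative:

```python
import numpy as np

def to_polar_coords(x_pixel, y_pixel):
    # Distance of each navigable-terrain pixel from the rover's camera
    dist = np.sqrt(x_pixel**2 + y_pixel**2)
    # Angle of each pixel relative to the rover's heading (radians)
    angles = np.arctan2(y_pixel, x_pixel)
    return dist, angles
```

Inside perception_step() you would call this on the rover-centric coordinates of your thresholded navigable terrain and assign the results to Rover.nav_dists and Rover.nav_angles.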
In the case of decision.py, the function decision_step() awaits your modification. It contains some example conditional statements that demonstrate how you might make decisions about adjusting throttle, brake and steering inputs. Until you update Rover.nav_angles, the default decision is to have the rover accelerate straight forward up to maximum speed. Once you apply your analysis in perception_step() and update the Rover.nav_angles field, you'll see that the rover is capable of a bit more complex navigation. It's up to you to build your decision tree… your artificial intelligence to give the rover a brain!
Supporting Functions
After that comes one more import of note:
from supporting_functions import update_rover, create_output_images
Have a look at supporting_functions.py to see what these functions are doing. In update_rover() your RoverState() object gets updated with each new batch of telemetry. The create_output_images() function is where your Rover.worldmap is compared with the ground truth map and, along with Rover.vision_image, gets converted into base64 strings to send back to the rover. You don't need to modify this code, but you are welcome to do so if you would like to change what the display images look like, or what is displayed.
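To make the base64 conversion concrete, here's a sketch of the kind of encoding create_output_images() performs. The function name image_to_base64_string is illustrative (not from the project code), and it assumes Pillow is installed, as it is for this project:

```python
import base64
from io import BytesIO

import numpy as np
from PIL import Image

def image_to_base64_string(img_array):
    # Convert a numpy image array to a base64-encoded PNG string,
    # suitable for sending over the SocketIO connection to the simulator
    pil_img = Image.fromarray(img_array.astype(np.uint8))
    buff = BytesIO()
    pil_img.save(buff, format='PNG')
    return base64.b64encode(buff.getvalue()).decode('utf-8')
```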
SocketIO Server
Next, you'll find some initialization of the SocketIO server that you don't need to worry about (though if you're curious you can learn more here). After that, you're reading in the ground truth map of the environment for comparison with later:
# Read in ground truth map and create 3-channel green version for overplotting
# NOTE: images are read in by default with the origin (0, 0) in the upper left
# and y-axis increasing downward.
ground_truth = mpimg.imread('../calibration_images/map_bw.png')
# This next line creates arrays of zeros in the red and blue channels
# and puts the map into the green channel. This is why the underlying
# map output looks green in the display image
ground_truth_3d = np.dstack((ground_truth*0, ground_truth*255, ground_truth*0)).astype(float)
RoverState() and telemetry()
Next you define and initialize your RoverState() class (as discussed here), which will allow you to keep track of telemetry values and results from your analysis.
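For reference, here's a pared-down sketch of what such a class might look like. The real RoverState in drive_rover.py defines more fields (mode, thresholds, the ground truth map, etc.), and the image sizes shown are illustrative:

```python
import numpy as np

class RoverState:
    def __init__(self):
        self.img = None          # current camera image from the simulator
        self.pos = None          # (x, y) position in world coordinates
        self.yaw = 0.0           # yaw angle in degrees
        self.vel = 0.0           # current velocity
        self.throttle = 0.0      # current throttle setting
        self.brake = 0.0         # current brake setting
        self.steer = 0.0         # current steering angle
        self.nav_dists = None    # distances of navigable terrain pixels
        self.nav_angles = None   # angles of navigable terrain pixels
        # Image to display perception results in the simulator
        self.vision_image = np.zeros((160, 320, 3), dtype=np.float32)
        # Worldmap built up from perception, compared against ground truth
        self.worldmap = np.zeros((200, 200, 3), dtype=np.float32)
```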
Next comes the definition of the telemetry() function. This function will be run every time the simulator sends a new batch of data (nominally 25 times per second). The first thing this function does is update the Rover() object with new telemetry values. After that, it calls the perception_step() and decision_step() functions to update the analysis. Finally, it prepares the commands and images to be sent back to the rover and calls the send_control() function.
# Define telemetry function for what to do with incoming data
@sio.on('telemetry')
def telemetry(sid, data):
    if data:
        global Rover
        # Initialize / update Rover with current telemetry
        Rover, image = update_rover(Rover, data)
        if np.isfinite(Rover.vel):
            # Execute the perception and decision steps to update the Rover's state
            Rover = perception_step(Rover)
            Rover = decision_step(Rover)
            # Create output images to send to server
            out_image_string1, out_image_string2 = create_output_images(Rover)
            # The action step! Send commands to the rover!
            commands = (Rover.throttle, Rover.brake, Rover.steer)
            send_control(commands, out_image_string1, out_image_string2)
        # In case of invalid telemetry, send null commands
        else:
            # Send zeros for throttle, brake and steer and empty images
            send_control((0, 0, 0), '', '')
        # If you want to save camera images from autonomous driving specify a path
        # Example: $ python drive_rover.py image_folder_path
        # Conditional to save image frame if folder was specified
        if args.image_folder != '':
            timestamp = datetime.utcnow().strftime('%Y_%m_%d_%H_%M_%S_%f')[:-3]
            image_filename = os.path.join(args.image_folder, timestamp)
            image.save('{}.jpg'.format(image_filename))
    else:
        sio.emit('manual', data={}, skip_sid=True)
You'll notice a section at the bottom of the telemetry function that allows for saving images from your autonomous navigation run if you like. The way to do that is to call drive_rover.py with an additional argument specifying the folder you want to save images to, like this:
$ python drive_rover.py path_to_folder
Launch in Autonomous Mode!
To get started with autonomous driving in the simulator, go ahead and run drive_rover.py by calling it at the terminal prompt in the following manner (you should see output similar to what's displayed below):
python drive_rover.py
NOT recording this run ...
(1439) wsgi starting up on http://0.0.0.0:4567
You can also record images while in autonomous mode by providing a path to where you want to save images when you call drive_rover.py, like this:
python drive_rover.py path_to_folder
Now launch the simulator and click on "Autonomous Mode". You should see the rover take off and start driving, while also publishing images to the screen. It doesn't drive very well yet and it's your job to teach it how to drive better!
Note: at any time in autonomous mode you can take over the controls using the arrow keys. This can help if you want to try out a particular series of maneuvers or get unstuck.